In recent years, deep learning has seen increasing use in histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of whole-slide images under domain shift, using the H&E-stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation, and combinations thereof. We observe that ensembles of methods generally lead to higher accuracy and better calibration, and that Test-Time Data Augmentation can be a promising alternative when an appropriate set of augmentations is chosen. Across methods, rejecting the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution and out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation for histopathological data.
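As a concrete illustration of one of the compared techniques, the following is a minimal sketch of Monte-Carlo Dropout with uncertainty-based tile rejection; the number of forward passes, the entropy-based uncertainty score, and the rejection fraction are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

def enable_dropout(model: nn.Module) -> None:
    """Keep only the dropout layers stochastic at test time."""
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, tiles: torch.Tensor, n_passes: int = 20):
    """Average the softmax over several stochastic forward passes and use
    the predictive entropy of the mean as the per-tile uncertainty."""
    model.eval()
    enable_dropout(model)
    probs = torch.stack([torch.softmax(model(tiles), dim=-1) for _ in range(n_passes)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def classify_with_rejection(mean_probs, entropy, reject_fraction=0.1):
    """Reject the most uncertain tiles and classify only the remainder."""
    n_keep = int(len(entropy) * (1 - reject_fraction))
    keep = torch.argsort(entropy)[:n_keep]  # lowest-entropy tiles
    return keep, mean_probs[keep].argmax(dim=-1)
```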
Artificial Intelligence (AI) has become commonplace in solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to widen, consequently introducing a demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community that is building standards for AI deployment in healthcare institutions and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment and propose solutions. Our report provides guidance on the processes that take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow. We also present a taxonomy of radiology AI use-cases. Through this report, we intend to educate the stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
In chest computed tomography (CT) scans, the automated segmentation of ground-glass opacities and consolidations can relieve the burden on radiologists during periods of high resource utilization. However, deep learning models are not trusted in clinical routine because they fail silently on out-of-distribution (OOD) data. We propose a lightweight OOD detection method that leverages the Mahalanobis distance in feature space and integrates seamlessly into state-of-the-art segmentation pipelines. The simple approach can even augment pre-trained models with clinically relevant uncertainty quantification. We validate our method across four chest CT distribution shifts and two magnetic resonance imaging applications, namely the segmentation of the hippocampus and the prostate. Our results show that the proposed method effectively detects far- and near-OOD samples in all explored scenarios.
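A minimal sketch of the core idea, assuming one pooled encoder feature vector per scan and a simple percentile threshold; the regularization term and threshold are illustrative choices, not the paper's calibrated setup.

```python
import numpy as np

def fit_gaussian(train_features: np.ndarray):
    """Estimate mean and regularized inverse covariance of in-distribution
    features (one pooled feature vector per training scan)."""
    mu = train_features.mean(axis=0)
    centered = train_features - mu
    cov = centered.T @ centered / len(train_features)
    cov += 1e-6 * np.eye(cov.shape[0])  # regularize for invertibility
    return mu, np.linalg.inv(cov)

def mahalanobis_score(features: np.ndarray, mu, cov_inv):
    """Mahalanobis distance of each feature vector to the training Gaussian."""
    d = features - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

def is_ood(scores, train_scores, percentile=95):
    """Flag scans whose distance exceeds a percentile of training distances."""
    return scores > np.percentile(train_scores, percentile)
```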
Federated learning is the most promising approach for training robust deep learning models for the segmentation of COVID-19-related findings in chest CTs. By learning in a decentralized fashion, heterogeneous data from a variety of sources and acquisition protocols can be leveraged while patient privacy is preserved. However, it is essential to continuously monitor model performance. Yet when it comes to the segmentation of diffuse lung lesions, a quick visual inspection is not enough to assess quality, and a thorough monitoring of all network outputs by expert radiologists is not feasible. In this work, we present an array of lightweight metrics that can be computed locally in each hospital and then aggregated for central monitoring of a federated system. Our linear model detects over 70% of low-quality segmentations on an out-of-distribution dataset, thereby reliably signaling a decline in model performance.
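A minimal sketch of how lightweight, locally computable statistics could feed a central linear quality model; the specific features and the logistic regressor are assumptions for illustration, not necessarily the exact metrics used in the work.

```python
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def local_metrics(probs: np.ndarray) -> np.ndarray:
    """Cheap per-case statistics a hospital can compute without ground truth.
    probs: (C, H, W, D) softmax output of the segmentation network."""
    pred = probs.argmax(axis=0)
    lesion_fraction = (pred > 0).mean()                  # predicted lesion volume share
    mean_confidence = probs.max(axis=0).mean()           # average voxel confidence
    mean_entropy = -(probs * np.log(probs + 1e-12)).sum(axis=0).mean()
    n_fragments = ndimage.label(pred > 0)[1]             # connected lesion components
    return np.array([lesion_fraction, mean_confidence, mean_entropy, n_fragments])

# Central side: a linear model, trained once on cases with known segmentation
# quality, flags likely failures from the aggregated per-hospital metrics alone.
quality_model = LogisticRegression()
# quality_model.fit(np.stack(metric_vectors), low_quality_labels)  # 1 = low quality
# alarms = quality_model.predict(np.stack(new_metric_vectors))
```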
While zero-shot learning (ZSL) has been studied extensively for 2D images, its application to 3D data remains recent and scarce, with only a few methods limited to classification. We present the first generative approach for both ZSL and generalized ZSL (GZSL) on 3D data, which can handle classification and, for the first time, semantic segmentation. We show that it reaches or outperforms the state of the art on ModelNet40 classification for both inductive ZSL and inductive GZSL. For semantic segmentation, we create three benchmarks for evaluating this new ZSL task, using S3DIS, ScanNet, and SemanticKITTI. Our experiments show that our method outperforms strong baselines, which we additionally propose for this task.
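A minimal sketch of the generative ZSL recipe described above, in which a generator conditioned on semantic class embeddings synthesizes backbone features for unseen classes; the network shape, embedding dimension, and training details are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Maps a semantic class embedding (e.g. a word vector) plus noise to a
    feature vector in the 3D backbone's feature space."""
    def __init__(self, embed_dim: int = 300, noise_dim: int = 64, feat_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + noise_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, feat_dim), nn.ReLU(),
        )

    def forward(self, class_embed: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([class_embed, noise], dim=-1))

# After training the generator on seen classes, synthesize features for unseen
# classes and fit the final classifier on real seen-class features plus these
# generated unseen-class features (the GZSL setting).
gen = FeatureGenerator()
unseen_embed = torch.randn(100, 300)  # placeholder class embeddings
fake_features = gen(unseen_embed, torch.randn(100, 64))
```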
Semantic segmentation is a key problem for many computer vision tasks. While approaches based on convolutional neural networks constantly break new records on different benchmarks, generalizing well to diverse testing environments remains a major challenge. In numerous real-world applications, there is indeed a large gap between the data distributions of the train and test domains, which results in severe performance loss at run-time. In this work, we address the task of unsupervised domain adaptation in semantic segmentation with losses based on the entropy of the pixel-wise predictions. To this end, we propose two novel, complementary methods using (i) an entropy loss and (ii) an adversarial loss, respectively. We demonstrate state-of-the-art performance in semantic segmentation on two challenging "synthetic-2-real" set-ups and show that the approach can also be used for detection.
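A minimal sketch of the entropy objective, assuming pixel-wise softmax predictions on unlabeled target-domain images; the log(C) normalization and the loss weight lambda_ent are illustrative choices.

```python
import math
import torch
import torch.nn.functional as F

def entropy_loss(target_logits: torch.Tensor) -> torch.Tensor:
    """Mean normalized Shannon entropy of the pixel-wise predictions on
    unlabeled target-domain images; target_logits has shape (N, C, H, W)."""
    probs = F.softmax(target_logits, dim=1)
    log_probs = F.log_softmax(target_logits, dim=1)
    pixel_entropy = -(probs * log_probs).sum(dim=1)  # (N, H, W)
    return pixel_entropy.mean() / math.log(target_logits.shape[1])

# Target images carry no labels; minimizing their prediction entropy pushes
# the model toward confident, source-like decisions:
# loss = seg_loss(src_logits, src_labels) + lambda_ent * entropy_loss(tgt_logits)
```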